Carbon Flux

FluxNet is a worldwide collection of sensor stations that record a number of local variables relating to atmospheric conditions, solar flux and soil moisture. This notebook visualizes the data used in the NASA Goddard/University of Alabama carbon monitoring project NEE Data Fusion (Grey Nearing et al., 2018), but using Python tools rather than Matlab.

The scientific goals of this notebook are to:

  • examine the carbon flux measurements from each site (net CO2 ecosystem exchange, or NEE)
  • determine the feasibility of using a model to predict the carbon flux at one site from every other site
  • generate and explain a simple predictive model

The "meta" goal is to show how Python tools let you solve the scientific goals, so that you can apply these tools to your own problems.

In [1]:
import sys
import dask
import numpy as np
import pandas as pd

import holoviews as hv

import hvplot.pandas
import geoviews.tile_sources as gts

pd.options.display.max_columns = 10
hv.extension('bokeh', width=80)

Open the intake catalog

This notebook uses intake to set up a data catalog with instructions for loading data for various projects. Before we read in any data, we'll open that catalog file and inspect the various data sources:

In [2]:
import intake

cat = intake.open_catalog('../catalog.yml')
list(cat)
Out[2]:
['landsat_5_small',
 'landsat_8_small',
 'landsat_5',
 'landsat_8',
 'google_landsat_band',
 'amazon_landsat_band',
 'fluxnet_daily',
 'fluxnet_metadata',
 'seattle_lidar']
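Each data source in the catalog can also be inspected before reading it. As a small sketch (assuming the standard intake DataSource API, which provides a describe() method on instantiated sources):

# Describe a single source without loading the underlying data
print(cat.fluxnet_metadata().describe())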

Load metadata

First we will load in the fluxnet_metadata, which contains site information for each of the FluxNet sites. Included in these data are the lat and lon of each site and the vegetation encoding (more on this below). In the next cell we will read in these data and take a look at a few random rows:

In [3]:
metadata = cat.fluxnet_metadata().read()
metadata.sample(5)

Out[3]:
site lat lon igbp
153 AU-Wac -37.4259 145.1878 EBF
110 US-Seg 34.3623 -106.7020 GRA
130 US-Wrc 45.8205 -121.9519 ENF
212 DE-Seh 50.8706 6.4497 CRO
300 US-Me5 44.4372 -121.5668 ENF

The vegetation type is classified according to the categories set out in the International Geosphere–Biosphere Programme (IGBP), with several additional categories defined on the fluxdata website.

In [4]:
igbp_vegetation = {
    'WAT': '00 - Water',
    'ENF': '01 - Evergreen Needleleaf Forest',
    'EBF': '02 - Evergreen Broadleaf Forest',
    'DNF': '03 - Deciduous Needleleaf Forest',
    'DBF': '04 - Deciduous Broadleaf Forest',
    'MF' : '05 - Mixed Forest',
    'CSH': '06 - Closed Shrublands',
    'OSH': '07 - Open shrublands',
    'WSA': '08 - Woody Savannas',
    'SAV': '09 - Savannas',
    'GRA': '10 - Grasslands',
    'WET': '11 - Permanent Wetlands',
    'CRO': '12 - Croplands',
    'URB': '13 - Urban and Built-up',
    'CNV': '14 - Cropland/Natural Vegetation Mosaics',
    'SNO': '15 - Snow and Ice',
    'BSV': '16 - Barren or Sparsely Vegetated'
}

We can use the dictionary above to map from igbp codes to longer labels, adding a new column to our metadata. We will make this column an ordered categorical to improve visualizations.

In [5]:
from pandas.api.types import CategoricalDtype

dtype = CategoricalDtype(ordered=True, categories=sorted(igbp_vegetation.values()))
metadata['vegetation'] = (metadata['igbp']
                          .apply(lambda x: igbp_vegetation[x])
                          .astype(dtype))
metadata.sample(5)
Out[5]:
site lat lon igbp vegetation
292 US-LWW 34.9604 -97.9789 GRA 10 - Grasslands
243 IT-Isp 45.8126 8.6336 DBF 04 - Deciduous Broadleaf Forest
336 US-Wi8 46.7223 -91.2524 DBF 04 - Deciduous Broadleaf Forest
68 US-ICh 68.6068 -149.2958 OSH 07 - Open shrublands
89 US-Ro1 44.7143 -93.0898 CRO 12 - Croplands

Visualize the fluxdata sites

The PyViz ecosystem strives to make it straightforward to visualize your data at each stage of a workflow, so that you stay aware of its structure and content. Here we will use OpenStreetMap tiles from geoviews to make a quick map of where the different sites are located and the vegetation at each site.

In [6]:
metadata.hvplot.points('lon', 'lat', geo=True, color='vegetation',
                       height=420, width=800, cmap='Category20') * gts.OSM
Out[6]:

Loading FluxNet data

The data in the nee_data_fusion repository is stored as a collection of CSV files, with the site name encoded in each filename.

The next cell defines functions to:

  • read in the data from all sites
  • discard columns that we don't need
  • calculate day of year
  • calculate the season (spring, summer, fall, winter)
In [7]:
data_columns = ['P_ERA', 'TA_ERA', 'PA_ERA', 'SW_IN_ERA', 'LW_IN_ERA', 'WS_ERA',
                'VPD_ERA', 'TIMESTAMP', 'site', 'NEE_CUT_USTAR50']
soil_data_columns = ['SWC_F_MDS_1', 'SWC_F_MDS_2', 'SWC_F_MDS_3',
                     'TS_F_MDS_1', 'TS_F_MDS_2', 'TS_F_MDS_3']

keep_from_csv = data_columns + soil_data_columns

y_variable = 'NEE_CUT_USTAR50'

def season(df, metadata):
    """Add season column based on lat and month
    """
    site = df['site'].cat.categories.item()
    lat = metadata[metadata['site'] == site]['lat'].item()
    if lat > 0:
        seasons = {3: 'spring',  4: 'spring',  5: 'spring',
                   6: 'summer',  7: 'summer',  8: 'summer',
                   9: 'fall',   10: 'fall',   11: 'fall',
                  12: 'winter',  1: 'winter',  2: 'winter'}
    else:
        seasons = {3: 'fall',    4: 'fall',    5: 'fall',
                   6: 'winter',  7: 'winter',  8: 'winter',
                   9: 'spring', 10: 'spring', 11: 'spring',
                  12: 'summer',  1: 'summer',  2: 'summer'}
    return df.assign(season=df.TIMESTAMP.dt.month.map(seasons))

def clean_data(df):
    """
    Clean data columns:
    
    * add NaN col for missing columns
    * throw away un-needed columns
    * add day of year
    """
    df = df.assign(**{col: np.nan for col in keep_from_csv if col not in df.columns})
    df = df[keep_from_csv]
    
    df = df.assign(DOY=df.TIMESTAMP.dt.dayofyear)
    df = df.assign(year=df.TIMESTAMP.dt.year)
    df = season(df, metadata)
    
    return df

Read and clean data

This will take a few minutes if the data is not cached yet. First we will get a list of all the files on the S3 bucket, then we will iterate over those files and cache, read, and munge the data in each one. Cleaning is needed before we concatenate across sites, because the columns in each file don't necessarily match those in the other files.

In [8]:
from s3fs import S3FileSystem
In [9]:
s3 = S3FileSystem(anon=True)
s3_paths = s3.glob('earth-data/carbon_flux/nee_data_fusion/FLX*')
In [10]:
datasets = []
skipped = []
used = []

for i, s3_path in enumerate(s3_paths):
    sys.stdout.write('\r{}/{}'.format(i+1, len(s3_paths)))
    
    dd = cat.fluxnet_daily(s3_path=s3_path).to_dask()
    site = dd['site'].cat.categories.item()
    
    if not set(dd.columns) >= set(data_columns):
        skipped.append(site)
        continue

    datasets.append(clean_data(dd))
    used.append(site)

print()
print('Found {} fluxnet sites with enough data to use - skipped {}'.format(len(used), len(skipped)))

Now that we have a list of datasets, we will concatenate across all rows. Since the data is loaded lazily, using dask, we need to explicitly call compute to get the data in memory. To learn more about this, take a look at the Data Ingestion tutorial.

In [11]:
data = dask.dataframe.concat(datasets).compute()
data.columns
Out[11]:
Index(['P_ERA', 'TA_ERA', 'PA_ERA', 'SW_IN_ERA', 'LW_IN_ERA', 'WS_ERA',
       'VPD_ERA', 'TIMESTAMP', 'site', 'NEE_CUT_USTAR50', 'SWC_F_MDS_1',
       'SWC_F_MDS_2', 'SWC_F_MDS_3', 'TS_F_MDS_1', 'TS_F_MDS_2', 'TS_F_MDS_3',
       'DOY', 'year', 'season'],
      dtype='object')

We'll also set the data type of 'site' to 'category'. This will come in handy later.

In [12]:
data['site'] = data['site'].astype('category')

Visualizing Data Available at Sites

We can look at the sites for which we have data. We'll plot the sites on a world map again, this time using a custom colormap to distinguish sites with valid data, sites where data exist but were not loaded because too many fields were missing, and sites where no data were available. In addition to this map, we'll count the different vegetation types at the valid sites.

In [13]:
def mapper(x):
    if x in used:
        return 'valid'
    elif x in skipped:
        return 'skipped'
    else:
        return 'no data'
    
cmap = {'valid': 'green', 'skipped': 'red', 'no data': 'darkgray'}

QA = metadata.copy()
QA['quality'] = QA['site'].map(mapper)

all_points = QA.hvplot.points('lon', 'lat', geo=True, color='quality', 
                              cmap=cmap, hover_cols=['site', 'vegetation'],
                              height=420, width=600).options(tools=['hover', 'tap'], 
                                                             legend_position='top')

def veg_count(data):
    veg_count = data['vegetation'].value_counts().sort_index(ascending=False)
    return veg_count.hvplot.barh(height=420, width=500)

hist = veg_count(QA[QA.quality=='valid']).relabel('Vegetation counts for valid sites')

all_points * gts.OSM + hist
Out[13]:

We'll make a couple of functions that generate plots from either the full dataset or a subset of it. We will use these in a dashboard below.

In [14]:
def site_timeseries(data):
    """Timeseries plot showing the mean carbon flux at each DOY as well as the min and max"""
    
    tseries = hv.Overlay([
        (data.groupby(['DOY', 'year'])[y_variable]
             .mean().groupby('DOY').agg([np.min, np.max])
             .hvplot.area('DOY', 'amin', 'amax', alpha=0.2, fields={'amin': y_variable})),
        data.groupby('DOY')[y_variable].mean().hvplot()])
    
    return tseries.options(width=800, height=400)

def site_count_plot(data):
    """Plot of the number of observations of each of the non-mandatory variables."""
    return data[soil_data_columns + ['site']].count().hvplot.bar(rot=90, width=300, height=400)

timeseries = site_timeseries(data)
count_plot = site_count_plot(data)
timeseries + count_plot
Out[14]:

Dashboard

Using the plots and functions defined above, we can make a Panel dashboard of sites, where clicking on a site displays the timeseries and variable count for that particular site.

In [15]:
from holoviews.streams import Selection1D
import panel as pn
In [16]:
stream = Selection1D(source=all_points)
empty = timeseries.relabel('No selection') + count_plot.relabel('No selection')

def site_selection(index):
    if not index:
        return empty
    i = index[0]
    if i in QA[QA.quality=='valid'].index:
        site = QA.iloc[i].site
        ts = site_timeseries(data[data.site == site]).relabel(site)
        ct = site_count_plot(data[data.site == site]).relabel(site)
        return ts + ct
    else:
        return empty

one_site = hv.DynamicMap(site_selection, streams=[stream])

pn.Column(pn.Row(all_points * gts.OSM, hist), pn.Row(one_site))
Out[16]:

Merge data

Now that the data are loaded in we can merge the daily data with the metadata from before.

In order to use the categorical igbp field with machine-learning tools, we will create a one-hot encoding where each column corresponds to one of the igbp types, the rows correspond to observations, and all the cells are filled with 0 or 1. This can be done using the method pd.get_dummies:

In [17]:
onehot_metadata = pd.get_dummies(metadata, columns=['igbp'])
onehot_metadata.sample(5)
Out[17]:
site lat lon vegetation igbp_BSV ... igbp_SAV igbp_SNO igbp_WAT igbp_WET igbp_WSA
168 CA-NS5 55.8631 -98.4850 01 - Evergreen Needleleaf Forest 0 ... 0 0 0 0 0
38 US-Brw 71.3225 -156.6092 11 - Permanent Wetlands 0 ... 0 0 0 1 0
188 CH-Oe2 47.2863 7.7343 12 - Croplands 0 ... 0 0 0 0 0
244 IT-La2 45.9542 11.2853 01 - Evergreen Needleleaf Forest 0 ... 0 0 0 0 0
111 US-Ses 34.3349 -106.7442 07 - Open shrublands 0 ... 0 0 0 0 0

5 rows × 19 columns

We'll do the same for season - keeping season as a column.

In [18]:
data = pd.get_dummies(data, columns=['season']).assign(season=data['season'])

We'll merge the metadata with all our daily observations - creating a tidy dataframe.

In [19]:
df = pd.merge(data, onehot_metadata, on='site')
df.sample(5)
Out[19]:
P_ERA TA_ERA PA_ERA SW_IN_ERA LW_IN_ERA ... igbp_SAV igbp_SNO igbp_WAT igbp_WET igbp_WSA
14508 0.000 27.993 100.430 301.211 373.187 ... 0 0 0 0 0
195275 0.691 10.416 87.639 140.151 331.223 ... 0 0 0 0 0
509670 0.859 14.574 99.851 262.338 339.183 ... 0 0 0 0 0
408846 1.537 11.125 98.540 153.966 283.061 ... 0 0 0 0 0
433763 0.000 17.476 90.887 340.971 302.338 ... 0 0 0 0 0

5 rows × 41 columns

Visualizing Soil Data Availability at Sites

Now that all of our observations are merged with the site metadata, we can take a look at which sites have soil data. Some sites have soil moisture and temperature data at only one depth, while others have the data at all 3 depths. We'll look at the distribution of availability across sites.

In [20]:
partial_soil_data = df[df[soil_data_columns].notnull().any(1)]
partial_soil_data_sites = metadata[metadata.site.isin(partial_soil_data.site.unique())]
In [21]:
full_soil_data = df[df[soil_data_columns].notnull().all(1)]
full_soil_data_sites = metadata[metadata.site.isin(full_soil_data.site.unique())]
In [22]:
args = dict(geo=True, hover_cols=['site', 'vegetation'], height=420, width=600)

partial = partial_soil_data_sites.hvplot.points('lon', 'lat', **args).relabel('partial soil data')
full    =    full_soil_data_sites.hvplot.points('lon', 'lat', **args).relabel('full soil data')

(partial * full * gts.OSM).options(legend_position='top') +  veg_count(partial_soil_data_sites) * veg_count(full_soil_data_sites)
Out[22]:

Since there seems to be a strong geographic pattern in the availability of soil moisture and soil temperature data, we won't use those columns in our model.

In [23]:
df = df.drop(columns=soil_data_columns)

Now we will keep only the rows where there are no null values:

In [24]:
df = df[df.notnull().all(1)].reset_index(drop=True)
In [25]:
df['site'] = df['site'].astype('category')

Assigning roles to variables

Before we train a model to predict carbon flux globally, we need to choose which variables to include in the input to the model. We should use only variables that we expect to have some relationship with the variable that we are trying to predict.

In [26]:
explanatory_cols = ['lat']
data_cols = ['P_ERA', 'TA_ERA', 'PA_ERA', 'SW_IN_ERA', 'LW_IN_ERA', 'WS_ERA', 'VPD_ERA']
season_cols = [col for col in df.columns if col.startswith('season_')]
igbp_cols = [col for col in df.columns if col.startswith('igbp_')]
In [27]:
x = df[data_cols + igbp_cols + explanatory_cols + season_cols].values
y = df[y_variable].values

Scaling the Data

In [28]:
from sklearn.preprocessing import StandardScaler

# transform data matrix so 0 mean, unit variance for each feature
X = StandardScaler().fit_transform(x)

Now we are ready to train a model to predict carbon flux globally.

Training and Testing

We'll shuffle the sites and select 10% of them to be used as a test set. The rest we will use for training. Note that you might get better results using leave-one-out, but since we have a large amount of data, classical validation will be much faster.
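For reference, a leave-one-site-out split could be set up with scikit-learn's LeaveOneGroupOut. This is only a sketch (not run here, and it uses the X, y, and df defined above), since it would train and evaluate one model per site:

from sklearn.model_selection import LeaveOneGroupOut

logo = LeaveOneGroupOut()
groups = df.site.cat.codes.values

# One split per site: train on all other sites, test on the held-out site
for loo_train_idx, loo_test_idx in logo.split(X, y, groups):
    held_out_site = df.site.iloc[loo_test_idx].unique()[0]
    # ... fit a model on X[loo_train_idx] and evaluate it on X[loo_test_idx]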

In [29]:
from sklearn.model_selection import GroupShuffleSplit

sep = GroupShuffleSplit(train_size=0.9, test_size=0.1)
train_idx, test_idx = next(sep.split(X, y, df.site.cat.codes.values))
In [30]:
train_sites = df.site.iloc[train_idx].unique()
test_sites = df.site.iloc[test_idx].unique()

train_site_metadata = metadata[metadata.site.isin(train_sites)]
test_site_metadata = metadata[metadata.site.isin(test_sites)]

Let's make a world map showing the sites that will be used in training and those that will be used in testing:

In [31]:
train = train_site_metadata.hvplot.points('lon', 'lat', **args).relabel('training sites')
test  = test_site_metadata.hvplot.points( 'lon', 'lat', **args).relabel('testing sites') 

(train * test * gts.OSM).options(legend_position='top') +  veg_count(train_site_metadata) * veg_count(test_site_metadata)
Out[31]:

This distribution seems reasonably uniform and unbiased, though a different random sampling might have allowed testing for each continent and all vegetation types.

Training the Regression Model

We'll construct a linear regression model using our randomly selected training sites and test sites.

In [32]:
from sklearn.linear_model import LinearRegression
In [33]:
model = LinearRegression()
model.fit(X[train_idx], y[train_idx]);

We'll create a little function to look at observed vs. predicted values.

In [34]:
from holoviews.operation.datashader import datashade

def result_plot(predicted, observed, title, corr=None, res=0.1):
    """Plot datashaded observed vs predicted"""
    
    corr = corr if corr is not None else np.corrcoef(predicted, observed)[0][1]
    title = '{} (correlation: {:.02f})'.format(title, corr)
    scatter = hv.Scatter((predicted, observed), 'predicted', 'observed')\
                .redim.range(predicted=(observed.min(), observed.max()))
    
    return datashade(scatter, y_sampling=res, x_sampling=res).relabel(title)
In [35]:
(result_plot(model.predict(X[train_idx]), y[train_idx], 'Training') + \
 result_plot(model.predict(X[test_idx ]), y[test_idx],  'Testing')).options('RGB', axiswise=True, width=500)
Out[35]:

Prediction at test sites

We can see how well the prediction does at each of our testing sites by making another dashboard.

In [36]:
results = []

for site in test_sites:
    site_test_idx = df[df.site == site].index
    y_hat_test = model.predict(X[site_test_idx])
    corr =  np.corrcoef(y_hat_test, y[site_test_idx])[0][1]
    
    results.append({'site': site,
                    'observed': y[site_test_idx], 
                    'predicted': y_hat_test, 
                    'corr': corr})
In [37]:
test_site_results = pd.merge(test_site_metadata, pd.DataFrame(results), 
                             on='site').set_index('site', drop=False)

Now we can set up another dashboard with just the test sites, where tapping on a given site produces a plot of the predicted vs. observed carbon flux.

First we'll set up a timeseries function.

In [38]:
def timeseries_observed_vs_predicted(site=None):
    """
    Make a timeseries plot showing the predicted/observed 
    mean carbon flux at each DOY as well as the min and max
    """
    if site:
        data = df[df.site == site].assign(predicted=test_site_results.loc[site, 'predicted'])
        corr = test_site_results.loc[site, 'corr']
        title = 'Site: {}, correlation coefficient: {:.02f}'.format(site, corr)
    else:
        data = df.assign(predicted=np.nan)
        title = 'No Selection'

    spread = data.groupby(['DOY', 'year'])[y_variable].mean().groupby('DOY').agg([np.min, np.max]) \
             .hvplot.area('DOY', 'amin', 'amax', alpha=0.2, fields={'amin': 'observed'})
    observed  = data.groupby('DOY')[y_variable ].mean().hvplot().relabel('observed')
    predicted = data.groupby('DOY')['predicted'].mean().hvplot().relabel('predicted')
    
    return (spread * observed * predicted).options(width=800).relabel(title)
In [39]:
timeseries_observed_vs_predicted(test_sites[1])
Out[39]:

Then we'll set up the points colored by correlation coefficient.

In [40]:
test_points = test_site_results.hvplot.points('lon', 'lat', geo=True, c='corr', legend=False,
                                              cmap='coolwarm_r', s=150, height=420, width=800, 
                                              hover_cols=['vegetation', 'site']).options(
                                              tools=['tap', 'hover'], line_color='black')

And put it together into a dashboard. This will look very similar to the one above.

In [41]:
test_stream = Selection1D(source=test_points)

def test_site_selection(index):
    site = None if not index else test_sites[index[0]]
    return timeseries_observed_vs_predicted(site)

one_test_site = hv.DynamicMap(test_site_selection, streams=[test_stream])
title = 'Test sites colored by correlation: tap on site to plot long-term-mean timeseries'

dash = pn.Column((test_points * gts.OSM).relabel(title), one_test_site)
dash.servable()
Out[41]:

Optional: Seasonal Prediction

Clicking on some of the sites above suggests that prediction often works well for some months and not for others. Perhaps different variables are important for prediction depending on the season? We might be able to achieve better results if we generate separate models for each season. First we'll set up a function that computes prediction stats for a given training index, test index, X array, y array, and season array.

In [42]:
seasons = ['summer', 'fall', 'spring', 'winter']
In [43]:
def prediction_stats(train_idx, test_idx, X, y, season):
    """
    Compute prediction stats for equal length arrays X, y, and season
    split into train_idx and test_idx
    """
    pred = {}

    for s in seasons:
        season_idx = np.where(season==s)
        season_train_idx = np.intersect1d(season_idx, train_idx, assume_unique=True)
        season_test_idx = np.intersect1d(season_idx, test_idx, assume_unique=True)
        
        model = LinearRegression()
        model.fit(X[season_train_idx], y[season_train_idx])
        
        y_hat = model.predict(X[season_test_idx])
        y_test = y[season_test_idx]
        pred[s] = {'predicted': y_hat,
                   'observed': y_test,
                   'corrcoef': np.corrcoef(y_hat, y_test)[0][1],
                   'test_index': test_idx}
    return pred

Setup Dask

With dask, we can distribute tasks over cores and do parallel computation. For more information see https://dask.org/

In [44]:
from distributed import Client

client = Client()
client
Out[44]:

Client
  • Workers: 4
  • Cores: 8
  • Memory: 17.18 GB

Now we'll scatter our data using dask and make a number of different splits. For each split we'll compute the prediction stats for each season.

In [45]:
futures = []
sep = GroupShuffleSplit(n_splits=50, train_size=0.9, test_size=0.1)

X_future = client.scatter(X)
y_future = client.scatter(y)
season_future = client.scatter(df['season'].values)

for i, (train_index, test_index) in enumerate(sep.split(X, y, df.site.cat.codes.values)):
    train_future = client.scatter(train_index)
    test_future = client.scatter(test_index)
    futures += [client.submit(prediction_stats, train_future, test_future,
                              X_future, y_future, season_future)]

Now that we have our computations set up in dask, we can gather the results:

In [46]:
results = client.gather(futures)

And consolidate the results for each season.

In [47]:
output = {
    s: {
        'predicted': np.concatenate([i[s]['predicted'] for i in results]),
        'observed': np.concatenate([i[s]['observed'] for i in results]),
        'test_index': np.concatenate([i[s]['test_index'] for i in results]),
        'corrcoef': np.array([i[s]['corrcoef'] for i in results])
    } for s in seasons}
In [48]:
hv.Layout([
    result_plot(output[s]['predicted'], output[s]['observed'], s, output[s]['corrcoef'].mean())
    for s in seasons]).cols(2).options('RGB', axiswise=True, width=400)
Out[48]:
In [49]:
def helper(s):
    corr = output[s]['corrcoef']
    return pd.DataFrame([corr, [s] * len(corr)], index=['corr', 'season']).T

corr = pd.concat(map(helper, seasons)).reset_index(drop=True)
In [50]:
corr.hvplot.hist(y='corr', groupby='season', bins=np.arange(0, .9, .05).tolist(), dynamic=False, width=500)
Out[50]:
In [51]:
corr.mean()
Out[51]:
corr    0.299728
dtype: float64
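For a per-season breakdown of the same numbers, here is a small sketch against the corr dataframe built above (the corr column is cast to float because it was assembled from an object array):

# Mean correlation coefficient per season, averaged over the 50 splits
corr.assign(corr=corr['corr'].astype(float)).groupby('season')['corr'].mean()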

Suggested Next Steps

  • Can we predict certain vegetation types better than others?
  • Calculate the fraction of explained variance (a starting point for both of these is sketched below).
  • Replace each FluxNet input variable with a remotely sensed (satellite-imaged) quantity, to predict carbon flux globally.
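As a starting point for the first two items, here is a minimal sketch, assuming the model, X, y, test_idx, df, and y_variable defined above are still in scope. It computes the R² score (the fraction of variance explained) on the held-out test split, and the correlation between observed and predicted flux for each vegetation type:

from sklearn.metrics import r2_score

# Fraction of variance explained on the held-out test sites
y_hat = model.predict(X[test_idx])
print('R^2 on the test split: {:.3f}'.format(r2_score(y[test_idx], y_hat)))

# Observed vs. predicted correlation, grouped by vegetation type
test_df = df.iloc[test_idx].assign(predicted=y_hat)
by_veg = test_df.groupby('vegetation', observed=True).apply(
    lambda g: np.corrcoef(g['predicted'], g[y_variable])[0][1] if len(g) > 1 else np.nan)
print(by_veg.sort_values())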